perm filename CONCEP.ADV[E76,JMC] blob
sn#227658 filedate 1976-07-26 generic text, type C, neo UTF8
ARPA RELEVANCE OF RECENT RESULTS ON CONCEPTS
Recently we obtained some results on a first order theory of
concepts with which we are quite pleased. They seem to open the
way to a much better logical treatment of knowledge, belief and
goal-seeking than has hitherto been possible in artificial
intelligence. The results have not yet been published - a paper
will be submitted within two months - so the opinion of the AI
community is not yet available, but we regard them as a major breakthrough.
We understand, however, that the relevance of such results
to ARPA's goals has been questioned. This provides an occasion
to explain not only the results themselves but also the relevance
of theory in general to the application of artificial intelligence
research to DoD problems.
Before doing so, let us emphasize that not all applications
require this new theory; people are already working on applications
of previous results. However, human-level general intelligence - the
computer analog of the atomic bomb - certainly requires it, and
the theory will have applications long before human-level general
intelligence is achieved.
As with most scientific theories, there is a substantial chain
of reasoning between potential applications and the problems on which
we are currently working. However, unlike the nuclear physics case
before approximately 1939, the chain of reasoning is quite definite.
It goes like this:
1. Present programs exhibiting artificial intelligence
deal with limited subject matters and with rather easy problems.
There is good reason to suspect that present methods have
limitations in principle, although it is controversial what
these limits are.
2. In studying general intelligence, it pays to temporarily
separate the epistemological part of the problem from the heuristic.
What this terminology means is that we can study the general facts
about the world separately from the programming methods that use
the facts to solve problems.
3. In most difficult intellectual problems, the facts are not
all available, and part of the problem is to figure out how to
find the necessary facts. (Most AI work so far has concerned
problems in which the program has all the necessary facts; it
need only figure out how to use them to solve the problem.)
4. Therefore, we need to represent facts about facts -
information about who knows what. In this, information about
what is unknown can be as important as information about what
is known. If the machine can conclude that it cannot answer a
certain question by reasoning, then it must decide whether to ask
someone, find a reference, or make an experiment.
5. We have made two recent discoveries: a new way of
expressing facts about knowledge, beliefs, and goals,
and a formalism that permits proving that someone doesn't know
something under much wider conditions than was previously
possible.
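The idea of "facts about facts" in points 4 and 5 can be made concrete with a toy store of world facts and meta-facts about who knows them. This is only an illustrative sketch in modern notation - the predicate names and the rule that unlisted knowledge counts as unknown are our own assumptions, not the formalism the text describes.

```python
# Facts about the world, as simple tuples (all names are illustrative).
facts = {("telephone", "Mike", "333-3333")}

# Facts about facts: who knows what.
meta = {("knows", "Pat", ("telephone", "Mike", "333-3333"))}

def knows(agent, fact):
    # A meta-fact asserting that `agent` knows `fact`.
    return ("knows", agent, fact) in meta

def must_inquire(agent, fact):
    # Information about what is unknown matters: if the machine can
    # conclude that it does not know a fact, it must ask someone,
    # find a reference, or make an experiment rather than reason on.
    # (Assumption: knowledge not recorded in `meta` counts as unknown.)
    return not knows(agent, fact)
```

Here Pat is recorded as knowing Mike's telephone number, so `knows("Pat", ...)` holds, while `must_inquire("Machine", ...)` holds and would trigger one of the fallback actions.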
Imagine a data-base system that handles procurement.
Suppose it not only responds to requests for information and
requests to enter information but also makes purchase orders
for items in short supply. Suppose further that it
makes purchases on behalf of projects that must be kept
secret. It must then reason about what information its
suppliers need to fulfill their orders, but it must also
reason about whether a pattern of orders will permit an
enemy to deduce the existence and goals of a secret project.
If it seems incredible to entrust such decisions to a computer,
suppose that the computer plays merely an advisory role
in which it examines the patterns of information release and
warns a human decision maker that a proposed action will
give away certain information.
In order to do this job it must reason about whether certain
facts can or cannot be deduced from the information
to be released.
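The advisory check described above can be sketched as forward chaining: close the released facts under a set of inference rules available to an observer, and warn if any secret fact lands in the closure. This is a minimal sketch under assumptions of our own - the rules and fact names are invented, and real deduction would go well beyond propositional Horn clauses.

```python
def closure(facts, rules):
    # Smallest superset of `facts` closed under `rules`.
    # Each rule is a pair (premises, conclusion).
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def release_warning(released, rules, secrets):
    # Secrets an observer could deduce from the released information.
    # A nonempty result would be reported to the human decision maker.
    return closure(released, rules) & set(secrets)

# Hypothetical pattern of purchase orders and an observer's rule.
released = {"order(titanium)", "order(gyroscopes)"}
rules = [({"order(titanium)", "order(gyroscopes)"}, "exists(secret_project)")]
secrets = {"exists(secret_project)"}
```

With this data, `release_warning(released, rules, secrets)` returns `{"exists(secret_project)"}`, i.e. the pattern of orders gives away the project's existence, whereas either order alone would be safe.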